4 research outputs found

    Machine-assisted Cyber Threat Analysis using Conceptual Knowledge Discovery

    Over recent years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to remain accurate and effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complementary axes, namely: (I) the use of Formal Concept Analysis (FCA)-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective on how these three research axes are integrated.
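    As a toy illustration of the FCA machinery the first axis builds on, the sketch below enumerates the formal concepts of a small host-by-configuration-attribute context. The hosts and attribute names are hypothetical, not taken from the paper; this is the textbook closure-based enumeration, not the authors' mechanism.

    ```python
    from itertools import combinations

    # Hypothetical formal context: hosts (objects) x configuration attributes.
    context = {
        "web01": {"ssh_open", "outdated_tls", "public"},
        "db01":  {"ssh_open", "default_creds"},
        "app01": {"ssh_open", "outdated_tls"},
    }

    def intent(objects):
        """Attributes shared by every object in the set."""
        attrs = [context[o] for o in objects]
        # By convention, the empty object set shares all attributes.
        return set.intersection(*attrs) if attrs else {a for s in context.values() for a in s}

    def extent(attrs):
        """Objects possessing every attribute in the set."""
        return {o for o, s in context.items() if attrs <= s}

    # Enumerate formal concepts (extent, intent) by closing every object subset.
    concepts = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            i = intent(combo)
            concepts.add((frozenset(extent(i)), frozenset(i)))

    for e, i in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(e), "<->", sorted(i))
    ```

    Each printed pair is a maximal group of hosts sharing a maximal set of configuration attributes; in a vulnerability-management setting, such concepts group hosts by shared weak configurations. (The brute-force subset closure is exponential; practical FCA tools use algorithms such as NextClosure.)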

    Self-management of hybrid networks: Can we trust netflow data?

    Network measurement provides vital information on the health of managed networks. Collected network information can serve several purposes (e.g., accounting or security), depending on what the data will be used for. At the University of Twente (UT), an automatic decision process for hybrid networks that relies on collected network information has been investigated. This approach, called self-management of hybrid networks, requires information retrieved from measurement processes in order to automatically decide on establishing/releasing lambda-connections for IP flows that are long in duration and large in volume (known as elephant flows). Nonetheless, the employed measurement technique can break the self-management decisions if the reported information does not accurately describe the actual behavior and characteristics of the observed flows. Within this context, this paper investigates the trustworthiness of measurements performed using the popular NetFlow monitoring solution, particularly when elephant flows are observed. We primarily focus on the use of NetFlow with sampling to collect network information and investigate how reliable such information is for the self-management processes. This is important because the self-management approach decides which flows should be offloaded to the optical level based on the current state of the network and its running flows. We observe three specific flow metrics: octets, packets, and flow duration. Our analysis shows that NetFlow provides reliable information regarding octets and packets. On the other hand, the flow duration reported when sampling is employed tends to be shorter than the actual duration.

    Smart Dimensioning of IP Network Links

    Link dimensioning is generally considered an effective and (operationally) simple mechanism to meet given performance requirements. In practice, the required link capacity C is often estimated by rules of thumb, such as C = d·M, where M is the (envisaged) average traffic rate and d some (empirically determined) constant larger than 1. This paper studies the viability of this class of ‘simplistic’ dimensioning rules. Throughout, the performance criterion imposed is that the fraction of intervals of length T in which the input exceeds the available output capacity (i.e., C·T) should not exceed Δ, for given T and Δ. We first present a dimensioning formula that expresses the required link capacity as a function of M and a variance term V(T), which captures the burstiness on timescale T. We explain how M and V(T) can be estimated with low measurement effort. The dimensioning formula is then used to validate dimensioning rules of the type C = d·M. Our main findings are: (i) the factor d is strongly affected by the nature of the traffic, the level of aggregation, and the network infrastructure; if these conditions are more or less constant, one could empirically determine d; (ii) we can explicitly characterize how d is affected by the ‘performance parameters’, i.e., T and Δ.
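    The abstract does not spell out its dimensioning formula, but a capacity rule of the stated shape, under a Gaussian-traffic assumption, is C = M + (1/T)·sqrt(2·ln(1/Δ)·V(T)). The sketch below applies that assumed formula to a hypothetical per-interval byte trace; it is an illustration of the M-plus-variance structure, not the paper's exact result:

    ```python
    import math

    def required_capacity(traffic, T, delta):
        """
        Capacity C such that, under a Gaussian traffic model, the fraction of
        T-second intervals whose traffic exceeds C*T stays below delta:
            C = M + (1/T) * sqrt(2 * ln(1/delta) * V(T)),
        with M the mean rate and V(T) the per-interval traffic variance.
        `traffic` is a list of byte counts per T-second interval.
        """
        n = len(traffic)
        mean_bytes = sum(traffic) / n
        var_bytes = sum((x - mean_bytes) ** 2 for x in traffic) / (n - 1)
        M = mean_bytes / T  # average rate in bytes/s
        C = M + (1.0 / T) * math.sqrt(2.0 * math.log(1.0 / delta) * var_bytes)
        return M, C

    # Hypothetical per-second byte counts observed on a link.
    trace = [9_800, 10_500, 12_000, 9_100, 15_000, 10_200, 11_300, 9_900]
    M, C = required_capacity(trace, T=1.0, delta=0.01)
    print(f"M = {M:.0f} B/s, required C = {C:.0f} B/s, d = C/M = {C/M:.2f}")
    ```

    The implied safety factor d = C/M grows as Δ shrinks or as V(T) grows, which is consistent with the paper's finding that d depends on both the burstiness of the traffic and the performance parameters T and Δ.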